Quake III Arena
Human-level performance in 3D multiplayer games with population-based reinforcement learning
Artificially intelligent agents are getting better and better at two-player games, but most real-world endeavors require teamwork. Jaderberg et al. designed a computer program that excels at playing the video game Quake III Arena in Capture the Flag mode, in which two teams compete to capture each other's flags. The agents were trained by playing thousands of games, gradually learning successful strategies not unlike those favored by their human counterparts. Computer agents competed successfully against humans even when their reaction times were slowed to match those of humans.

Reinforcement learning (RL) has shown great success in increasingly complex single-agent environments and two-player turn-based games. However, the real world contains multiple agents, each learning and acting independently to cooperate and compete with other agents.
Artificial intelligence learns teamwork in a deadly game of capture the flag
DeepMind's bots work in pairs to capture the opposing team's flag on indoor and outdoor maps in Quake III Arena. Human gamers know just how hard it is to win a new spin on the classic computer game Quake: In a mazelike arena, they must work with other players to capture floating flags, all while dodging deadly gunfire. Now, for the first time, artificial intelligence (AI) has mastered teamwork in a complex first-person video game, coordinating its actions with both human and computer teammates to consistently beat opponents. "The scale of the experiments is remarkable," says Michael Littman, an AI expert at Brown University. Getting AI agents to work together is incredibly tough, he says. Although AI can drive cars and easily defeat the world's greatest chess and Go players one on one, researchers have struggled to get it to master teamwork.
DeepMind's AI can defeat human players in Quake III Arena's Capture the Flag mode
Few games are simpler in principle than capture the flag (excepting perhaps tag or kick the can). Two teams each have a marker located at their respective bases, and the objective is to capture the other team's marker and return it safely to one's own base. What's easily understood by humans is not so quickly grasped by machines, though. Where capture the flag is concerned in the video game domain, non-player characters have traditionally been programmed with heuristics and rules affording limited freedom of choice. But AI and machine learning promise to turn this paradigm on its head. In a paper published this week in the journal Science, roughly a year after the preprint appeared, researchers at DeepMind, the London-based subsidiary of Google parent company Alphabet, describe a system capable not only of learning how to play capture the flag in id Software's Quake III Arena, but of devising entirely novel human-level team-based strategies.
Rise of the machines: AI thrashes humans in multiplayer shooter 'Quake III Arena'
WASHINGTON - It's official: the machines are going to destroy you (if, that is, you're a professional gamer). A team of programmers at a British artificial intelligence company has designed automated "agents" that taught themselves how to play the seminal first-person shooter "Quake III Arena," and became so good they consistently beat human opponents. The work of the researchers from DeepMind, which is owned by Google's parent company, Alphabet Inc., was described in a paper published in Science on Thursday and marks the first time the feat has been accomplished. To be sure, computers have been proving their dominance over humans in one-on-one turn-based games such as chess ever since IBM's Deep Blue beat Garry Kasparov in 1997. More recently, a Google AI agent beat the world's No. 1 Go player in 2017.
Capture the Flag: the emergence of complex cooperative agents
Above: Four of our trained agents play together on indoor and outdoor procedurally generated Capture the Flag levels. Billions of people inhabit the planet, each with their own individual goals and actions, but still capable of coming together through teams, organisations and societies in impressive displays of collective intelligence. This is a setting we call multi-agent learning: many individual agents must act independently, yet learn to interact and cooperate with other agents. This is an immensely difficult problem, because with co-adapting agents the world is constantly changing. To investigate this problem we look at 3D first-person multiplayer video games.
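The co-adaptation problem described above, where every opponent is itself learning, is often attacked with population-based training: a pool of agents plays matches against each other, and weaker agents periodically inherit perturbed hyperparameters from stronger ones. The toy sketch below illustrates that idea only in outline; the agents, the single "skill" number, and the exploit/explore step are all hypothetical stand-ins, not DeepMind's actual For The Win algorithm or Quake III environment.

```python
import random

random.seed(0)

class Agent:
    """A drastically simplified agent: one skill value, one hyperparameter."""
    def __init__(self, lr):
        self.skill = 0.0
        self.lr = lr      # learning rate, tuned by the population
        self.wins = 0

def play(a, b):
    # Higher skill wins more often; the environment is non-stationary
    # because every opponent in the pool is improving at the same time.
    p_a = 1.0 / (1.0 + 10 ** ((b.skill - a.skill) / 5.0))
    return a if random.random() < p_a else b

def train(population, rounds):
    for r in range(rounds):
        for agent in population:
            opponent = random.choice([x for x in population if x is not agent])
            winner = play(agent, opponent)
            winner.wins += 1
            winner.skill += winner.lr          # crude stand-in for RL updates
        if (r + 1) % 10 == 0:
            # Exploit/explore: the weakest agent copies the strongest
            # agent's hyperparameter, with a small random perturbation.
            ranked = sorted(population, key=lambda x: x.wins)
            worst, best = ranked[0], ranked[-1]
            worst.lr = best.lr * random.uniform(0.8, 1.2)

pop = [Agent(lr=random.uniform(0.01, 0.2)) for _ in range(8)]
train(pop, rounds=50)
best = max(pop, key=lambda a: a.skill)
```

Even in this toy form, the key property survives: no single fixed opponent exists to overfit to, so each agent's progress is measured only against a shifting population.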
A.I. players master 'Quake III Arena,' manage to outperform humans
Those among us who fear that we've already passed the point of no return when it comes to artificial intelligence becoming self-aware and plotting to murder the human race will likely cite A.I. research company DeepMind's latest experiment as further proof of that notion. Using id Software's Quake III Arena, DeepMind has managed to train artificial players to be even more effective than their human counterparts. The challenge for DeepMind was not to see if its A.I. agents could defeat human players in battle, but rather if they could work together on procedurally generated levels to complete an objective, in this case capturing the flag. Because the levels' structure changes each time they play, the agents are unable to simply memorize locations in order to make it to the flag. This forced them to actually learn the strategies needed to win, in a similar manner to how human players might improve at the game.
How Alphabet's DeepMind used a 1999 video game to teach its AI teamwork
DeepMind, an Alphabet subsidiary, announced Tuesday its efforts to create an artificial intelligence (AI) that functions with human-like performance. Using Quake III Arena's Capture the Flag (CTF), a 3D first-person multiplayer video game, DeepMind taught AI how to work with and against humans to win the game. The rules of CTF haven't changed much since gym class. Two groups of individuals band together to steal the opponent's flag from their territory while also protecting their own. Teams can tag opponents that enter their territory, sending them back to their respective home base.